20 research outputs found

    Conceptual data sampling for image segmentation - an application for breast cancer images

    Get PDF
    At the present time, data analytics has become a buzzword for the information technology sector. In attempting to analyze data, one may follow various paths, be it deploying sophisticated technologies to process big data or using commodity hardware while applying data reduction/sampling techniques to draw meaningful insights from the data. In this thesis, we aim to reduce data size in terms of the number of tuples/objects for a given dataset. Our method is rooted in formal concept analysis (FCA), a mathematical framework for data analysis. The proposed transformation preserves the functional dependencies/implications in a database. Consequently, we can generate a much smaller data sample that is able to support decision making. In this study, we analyze a variety of reduction methods, including randomized object selection procedures, in order to identify the best one(s). The accuracy of decisions made on the generated sample is comparable to the accuracy of decisions made on the whole/original data. To illustrate the concept, we have chosen data from the medical image domain. The data used for experimentation contains microscopic images of breast cancer that need to be segmented into two categories, i.e. benign or malignant. An extensive set of experiments has been performed to show the strength of the proposed reduction method.
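
    The implication-preserving idea can be made concrete in a few lines. The sketch below is a minimal illustration, not the thesis's algorithm: it relies on the FCA fact that the implications of a binary context depend only on its distinct object intents (rows), so keeping one representative object per distinct row preserves them. The toy context and attribute count are invented for illustration.

```python
# Minimal sketch of implication-preserving sampling over a binary context.
# Keeping one object per distinct row leaves the implication theory of the
# context unchanged; the toy data below is an illustrative assumption.
import numpy as np

def conceptual_sample(context: np.ndarray) -> np.ndarray:
    """Return indices of one representative object per distinct intent."""
    _, first_idx = np.unique(context, axis=0, return_index=True)
    return np.sort(first_idx)

# Toy binary context: 6 objects (image regions) x 4 binary attributes.
context = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 0],   # duplicate intent of object 0
    [0, 1, 1, 1],
    [0, 1, 1, 1],   # duplicate intent of object 2
    [1, 1, 0, 0],
    [1, 0, 1, 0],   # another duplicate of object 0
])
sample_idx = conceptual_sample(context)
print(f"kept {len(sample_idx)} of {len(context)} objects:", sample_idx)
# kept 3 of 6 objects: [0 2 4]
```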

    Bi-Encoders based Species Normalization -- Pairwise Sentence Learning to Rank

    Full text link
    Motivation: Biomedical named-entity normalization involves connecting biomedical entities with distinct database identifiers in order to facilitate data integration across various fields of biology. Existing systems for biomedical named entity normalization rely heavily on dictionaries, manually created rules, and high-quality representative features such as lexical or morphological characteristics. However, recent research has investigated the use of neural network-based models to reduce dependence on dictionaries, manually crafted rules, and features. Despite these advancements, the performance of these models is still limited by the lack of sufficiently large training datasets. These models tend to overfit small training corpora and exhibit poor generalization when faced with previously unseen entities, necessitating the redesign of rules and features. Contribution: We present a novel deep learning approach for named entity normalization, treating it as a pairwise learning-to-rank problem. Our method utilizes the widely used information retrieval algorithm Best Matching 25 (BM25) to generate candidate concepts, followed by the application of Bidirectional Encoder Representations from Transformers (BERT) to re-rank the candidate list. Notably, our approach eliminates the need for feature engineering or rule creation. We conduct experiments on species entity types and evaluate our method against state-of-the-art techniques using the LINNAEUS and S800 biomedical corpora. Our proposed approach surpasses existing methods in linking entities to the NCBI taxonomy. To the best of our knowledge, there is no existing neural network-based approach for species normalization in the literature.
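
    A minimal sketch of the two-stage pipeline described above, assuming the rank_bm25 and sentence-transformers packages; the checkpoint, candidate pool, and mention are stand-ins, not the paper's trained model or the full NCBI taxonomy.

```python
# Stage 1: BM25 proposes candidate taxonomy names for a species mention.
# Stage 2: a bi-encoder re-ranks candidates by embedding similarity.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

# Hypothetical candidate pool standing in for NCBI taxonomy names.
taxonomy_names = ["Homo sapiens", "Mus musculus", "Escherichia coli",
                  "Saccharomyces cerevisiae", "Rattus norvegicus"]
mention = "E. coli"

# Stage 1: BM25 candidate generation over whitespace-tokenized names.
bm25 = BM25Okapi([name.lower().split() for name in taxonomy_names])
scores = bm25.get_scores(mention.lower().split())
top_k = sorted(range(len(scores)), key=lambda i: -scores[i])[:3]
candidates = [taxonomy_names[i] for i in top_k]

# Stage 2: re-rank by cosine similarity of BERT-style embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in checkpoint
sims = util.cos_sim(encoder.encode(mention), encoder.encode(candidates))[0]
best = candidates[int(sims.argmax())]
print("linked to:", best)
```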

    A Deep Learning Based Scalable and Adaptive Feature Extraction Framework for Medical Images

    Get PDF
    Feature extraction has fundamental value in enhancing the scalability and adaptability of medical image processing frameworks. The outcome of this stage has a tremendous effect on the reliability of the medical application being developed, particularly disease classification and prediction. The challenge of feature extraction frameworks for medical images lies in the anatomical and morphological structure of the image, which requires a powerful extraction system that highlights both high- and low-level features. The complementarity of the two feature types reinforces content-based retrieval of medical images and allows access to visible structures as well as an in-depth understanding of related deep hidden components. Several existing techniques have been used to extract high- and low-level features separately, including deep learning based approaches. However, the fusion of these features remains a challenging task. To tackle the drawback caused by the lack of feature combination and to enhance the reliability of feature extraction methods, this paper proposes a new hybrid feature extraction framework that focuses on the fusion and optimal selection of high- and low-level features. The scalability and reliability of the proposed method are achieved by automated adjustment of the final optimal features based on real-time scenarios, resulting in accurate and efficient disease classification from medical images. The proposed framework has been tested on two different datasets, BraTS and Retinal, achieving accuracy rates of 97% and 98.9%, respectively.
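
    To make the fusion step concrete, the sketch below combines a low-level intensity histogram with high-level features from a generic pretrained ResNet-18, then applies a simple variance-based selection. The backbone, histogram size, selection rule, and random toy images are assumptions, not the paper's framework.

```python
# Fuse handcrafted low-level features with deep high-level features, then
# apply a simple unsupervised selection step. All choices are illustrative.
import numpy as np
import torch
import torchvision.models as models
from sklearn.feature_selection import VarianceThreshold

def low_level_features(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Intensity histogram as a stand-in for handcrafted low-level features."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return hist.astype(np.float32)

# High-level extractor: ResNet-18 up to global average pooling (512-d output).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def high_level_features(image: np.ndarray) -> np.ndarray:
    x = torch.from_numpy(image).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        return extractor(x).flatten().numpy()

# Fuse features for a batch of toy grayscale "scans" (224x224 slices).
images = [np.random.rand(224, 224).astype(np.float32) for _ in range(4)]
fused = np.stack([np.concatenate([low_level_features(im),
                                  high_level_features(im)]) for im in images])
selected = VarianceThreshold(threshold=1e-4).fit_transform(fused)
print("fused:", fused.shape, "-> selected:", selected.shape)
```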

    Conceptual data sampling for breast cancer histology image classification

    Get PDF
    Data analytics has become increasingly complicated as the amount of data has increased. One technique used to enable data analytics on large datasets is data sampling, in which a portion of the data is selected that preserves the data characteristics for use in analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique creates samples based on the distribution of the data across a set of binary patterns. The proposed sampling technique is applied to classifying regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It also competes with other sampling methods in terms of sample size and sample quality, represented by classification accuracy and F1 measure.
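
    The evaluation protocol implied above (same classifier, sample versus full data, accuracy and F1 on a held-out set) can be sketched as follows; the synthetic data and random-forest classifier are placeholders for the histology features and the paper's actual models, and conceptual_sample() refers to the FCA-style routine sketched earlier in this listing.

```python
# Compare sample quality by training the same classifier on a candidate
# sample and on the full training set, then scoring on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
import numpy as np

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def evaluate(idx, label):
    clf = RandomForestClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    pred = clf.predict(X_te)
    print(f"{label:>13}: n={len(idx):4d} acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")

rng = np.random.default_rng(0)
full_idx = np.arange(len(X_tr))
random_idx = rng.choice(full_idx, size=200, replace=False)
evaluate(full_idx, "full data")
evaluate(random_idx, "random sample")
# A conceptual sample would be plugged in the same way, e.g.:
# evaluate(conceptual_sample(binarized_X_tr), "FCA sample")
```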

    Standardised practices in the networked management of congenital hyperinsulinism: a UK national collaborative consensus

    Get PDF
    Congenital hyperinsulinism (CHI) is a condition characterised by severe and recurrent hypoglycaemia in infants and young children caused by inappropriate insulin over-secretion. CHI is of heterogeneous aetiology with a significant genetic component and is often unresponsive to standard medical therapy options. The treatment of CHI can be multifaceted and complex, requiring multidisciplinary input. It is important to manage hypoglycaemia in CHI promptly, as the risk of long-term neurodisability arising from neuroglycopaenia is high. The UK CHI consensus on the practice and management of CHI was developed to optimise and harmonise clinical management of patients in centres specialising in CHI as well as in non-specialist centres engaged in collaborative, networked models of care. Using current best practice and a consensus approach, it provides guidance and practical advice in the domains of diagnosis, clinical assessment and treatment to mitigate hypoglycaemia risk and improve long-term outcomes for health and well-being.

    Azithromycin in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial

    Get PDF
    Background: Azithromycin has been proposed as a treatment for COVID-19 on the basis of its immunomodulatory actions. We aimed to evaluate the safety and efficacy of azithromycin in patients admitted to hospital with COVID-19. Methods: In this randomised, controlled, open-label, adaptive platform trial (Randomised Evaluation of COVID-19 Therapy [RECOVERY]), several possible treatments were compared with usual care in patients admitted to hospital with COVID-19 in the UK. The trial is underway at 176 hospitals in the UK. Eligible and consenting patients were randomly allocated to either usual standard of care alone or usual standard of care plus azithromycin 500 mg once per day by mouth or intravenously for 10 days or until discharge (or allocation to one of the other RECOVERY treatment groups). Patients were assigned via web-based simple (unstratified) randomisation with allocation concealment and were twice as likely to be randomly assigned to usual care as to any of the active treatment groups. Participants and local study staff were not masked to the allocated treatment, but all others involved in the trial were masked to the outcome data during the trial. The primary outcome was 28-day all-cause mortality, assessed in the intention-to-treat population. The trial is registered with ISRCTN, 50189673, and ClinicalTrials.gov, NCT04381936. Findings: Between April 7 and Nov 27, 2020, of 16 442 patients enrolled in the RECOVERY trial, 9433 (57%) were eligible and 7763 were included in the assessment of azithromycin. The mean age of these study participants was 65·3 years (SD 15·7) and approximately a third were women (2944 [38%] of 7763). 2582 patients were randomly allocated to receive azithromycin and 5181 patients were randomly allocated to usual care alone. Overall, 561 (22%) patients allocated to azithromycin and 1162 (22%) patients allocated to usual care died within 28 days (rate ratio 0·97, 95% CI 0·87–1·07; p=0·50). No significant difference was seen in duration of hospital stay (median 10 days [IQR 5 to >28] vs 11 days [5 to >28]) or the proportion of patients discharged from hospital alive within 28 days (rate ratio 1·04, 95% CI 0·98–1·10; p=0·19). Among those not on invasive mechanical ventilation at baseline, no significant difference was seen in the proportion meeting the composite endpoint of invasive mechanical ventilation or death (risk ratio 0·95, 95% CI 0·87–1·03; p=0·24). Interpretation: In patients admitted to hospital with COVID-19, azithromycin did not improve survival or other prespecified clinical outcomes. Azithromycin use in patients admitted to hospital with COVID-19 should be restricted to patients in whom there is a clear antimicrobial indication.
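
    As a sanity check, the headline mortality figure can be reproduced closely from the counts quoted above. The trial reports a log-rank rate ratio, so the simple risk ratio with a normal-approximation confidence interval computed below is only an approximation and differs slightly from the published 0·97 (0·87–1·07).

```python
# Approximate the reported rate ratio from the quoted 28-day death counts.
import math

deaths_azm, n_azm = 561, 2582       # azithromycin group
deaths_usual, n_usual = 1162, 5181  # usual care group

rr = (deaths_azm / n_azm) / (deaths_usual / n_usual)
# Standard error of log risk ratio (normal approximation).
se = math.sqrt(1/deaths_azm - 1/n_azm + 1/deaths_usual - 1/n_usual)
lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
print(f"risk ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# risk ratio 0.97 (95% CI 0.89-1.06)
```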

    Biomedical Information Extraction with Deep Neural Models

    Full text link
    University of Technology Sydney. Faculty of Engineering and Information Technology. Biomedical literature contains a wealth of knowledge in the form of unstructured articles and patents. Scientists find it hard to keep up to date with the literature being published. To further research and avoid repetition, published literature must be reviewed. Structured knowledge bases allow easy access to knowledge without manual screening of text documents. Knowledge base construction requires curation of literature, either manually or automatically. Manual curation of published literature for acquiring knowledge is tedious and time-consuming. Furthermore, manual curation cannot keep up with the rapidly growing literature, which calls for research into tools that automatically extract information from research articles. This thesis aims to identify entities and relations specific to the ChEBI ontology in publication abstracts. It includes identifying species, metabolites, proteins and chemicals and their relations, namely `Metabolite of', `Associated With', `Isolated From' and `Binds With'. Current approaches for biomedical information extraction rely on syntactic rules, dictionary matching or domain-specific features, resulting in highly specialised and often non-generalisable approaches. Deep learning methods, on the other hand, are capable of feature extraction. This thesis proposes deep learning methods for named entity recognition/normalisation and relation extraction. A knowledge graph has been constructed for storing and querying the extracted knowledge. This thesis makes three contributions to knowledge: Deep Contextualized Neural Embeddings for ChemNER, Bi-Encoder based learning to rank for entity normalisation, and Pre-trained transformers for ChEBI relation extraction. Contribution 1 proposes and evaluates improved word representations for named entity recognition using the Bi-LSTM-CRF network, by including embeddings from language models in its input representations. The proposed method is evaluated on two abstract and two patent corpora and establishes state-of-the-art results on the abstract corpora. Contribution 2 develops and evaluates a transformer-based ranking method, based on the BERT architecture, for the named entity normalisation task of linking species to the NCBI taxonomy. Species mentions are linked to NCBI taxonomy identifiers by generating candidates with the information retrieval algorithm BM25 and then re-ranking them using encoder representations from transformers. The proposed method has been evaluated on the S800 and LINNAEUS corpora and outperforms existing methods for species normalisation. Contribution 3 proposes and evaluates transformer-based models for ChEBI relation extraction. Finetuning and task-specific feature extraction approaches are proposed and compared. Empirical evidence suggests that finetuning is better when the target data is small.
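
    Contribution 1's input representation can be sketched as follows: static word embeddings concatenated with precomputed contextual language-model embeddings, fed to a Bi-LSTM that emits tag scores. The CRF decoding layer is omitted for brevity, and the embedding models, dimensions, and random inputs are assumptions, not the thesis's configuration.

```python
# Bi-LSTM tag scorer over concatenated static + contextual embeddings.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=5000, word_dim=100, lm_dim=768,
                 hidden=256, num_tags=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, word_dim)
        self.lstm = nn.LSTM(word_dim + lm_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids, lm_embeddings):
        # Concatenate static and contextual embeddings per token.
        x = torch.cat([self.word_emb(token_ids), lm_embeddings], dim=-1)
        h, _ = self.lstm(x)
        return self.scorer(h)  # emission scores; a CRF would decode these

# One toy sentence of 7 tokens with precomputed 768-d LM embeddings.
tagger = BiLSTMTagger()
token_ids = torch.randint(0, 5000, (1, 7))
lm_embeddings = torch.randn(1, 7, 768)
print(tagger(token_ids, lm_embeddings).shape)  # torch.Size([1, 7, 5])
```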

    Adaptive Backstepping Based Sensor and Actuator Fault Tolerant Control of a Manipulator

    No full text
    The purpose of this research is to propose and design a fault tolerant control (FTC) scheme for a robotic manipulator, to increase its reliability and performance in the presence of actuator and sensor faults. To achieve these objectives, a hybrid control law relying on observer- and hardware-redundancy-based techniques is formulated in this paper. Nonlinear observers are designed to estimate the unknown states. The comparison of actual and observed states leads to fault identification; this is followed by fault tolerance accomplished with redundant sensors. For actuator fault tolerance, fault estimation and controller reconfiguration techniques are applied in addition to the nominal control law. Fault estimation is based on the adaptive backstepping technique and is further used to construct the actuator fault tolerant control. The proposed method is applied to a six-degree-of-freedom (DOF) robotic manipulator model, and the effectiveness of this technique is verified by LabVIEW simulations. Simulation results demonstrate improved tracking performance in the presence of actuator and sensor failures.
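
    The observer-residual mechanism for sensor fault tolerance can be illustrated on a one-joint toy model. The sketch below is not the paper's six-DOF design: the dynamics, observer gain, fault size, and detection threshold are all assumptions chosen to show the detect-and-switch idea.

```python
# A Luenberger-style observer tracks the measured joint velocity; when the
# residual between measurement and estimate exceeds a threshold, control
# switches to a redundant sensor (hardware-redundancy-based tolerance).
import numpy as np

dt, steps = 0.01, 500
a, b, L = -2.0, 1.0, 5.0             # plant pole, input gain, observer gain
x = x_hat = 0.0                      # true and estimated joint velocity
threshold, fault_at, bias = 0.3, 2.0, 1.0

for k in range(steps):
    t = k * dt
    u = np.sin(t)                                      # nominal control input
    x += dt * (a * x + b * u)                          # plant propagation
    y_primary = x + (bias if t >= fault_at else 0.0)   # biased faulty sensor
    y_backup = x                                       # redundant healthy sensor
    residual = y_primary - x_hat
    faulty = abs(residual) > threshold
    y = y_backup if faulty else y_primary              # sensor reconfiguration
    x_hat += dt * (a * x_hat + b * u + L * (y - x_hat))
    if faulty:
        print(f"t={t:.2f}s: sensor fault flagged, switched to backup")
        break
```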